AdaTask: A Task-Aware Adaptive Learning Rate Approach to Multi-Task Learning

Authors

Abstract

Multi-task learning (MTL) models have demonstrated impressive results in computer vision, natural language processing, and recommender systems. Even though many approaches have been proposed, how well these approaches balance different tasks on each parameter still remains unclear. In this paper, we propose to measure the task dominance degree of a parameter by the total updates of each task. Specifically, we compute the total updates by the exponentially decaying Average of squared Updates (AU) on a parameter from the corresponding task. Based on this novel metric, we observe that many parameters in existing MTL methods, especially those in higher shared layers, are dominated by one or several tasks. The dominance of AU is mainly due to the dominance of accumulative gradients from one or several tasks. Motivated by this, we propose a Task-wise Adaptive learning rate approach, AdaTask for short, to separate the accumulative gradients and hence the learning rate of each task for each parameter in adaptive learning rate methods (e.g., AdaGrad, RMSProp, and Adam). Comprehensive experiments on computer vision and recommender system datasets demonstrate that AdaTask significantly improves the performance of dominated tasks, resulting in SOTA average task-wise performance. Analysis on both synthetic and real-world datasets shows that AdaTask balances parameters in every shared layer well.
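The core idea in the abstract, separating the accumulative squared gradients per task so that one task's large gradients cannot shrink the adaptive learning rate of the others, can be illustrated with a minimal RMSProp-style sketch. This is an illustrative reconstruction from the abstract, not the authors' code; the function and variable names are our own.

```python
import numpy as np

def adatask_rmsprop_step(param, task_grads, task_accums, lr=0.01, beta=0.9, eps=1e-8):
    """One AdaTask-style RMSProp step on a single shared parameter.

    Instead of one accumulator over the summed gradient, each task keeps its
    own exponentially decaying average of squared gradients, so each task
    gets its own adaptive learning rate on the shared parameter.
    (Hypothetical sketch of the idea described in the abstract.)
    """
    update = np.zeros_like(param)
    for k, g in enumerate(task_grads):
        # Per-task second-moment accumulator (the "separated" gradients).
        task_accums[k] = beta * task_accums[k] + (1 - beta) * g ** 2
        # Normalize each task's update by its own accumulator.
        update += lr * g / (np.sqrt(task_accums[k]) + eps)
    return param - update, task_accums

# Usage: two tasks with very different raw gradient scales on one shared weight.
w = np.array([1.0, 1.0])
accums = [np.zeros_like(w), np.zeros_like(w)]
grads = [np.array([10.0, 0.0]), np.array([0.0, 0.1])]  # task 1 dominates in magnitude
w, accums = adatask_rmsprop_step(w, grads, accums)
# Both coordinates move by nearly the same amount despite the 100x gradient gap.
```

Because each task's gradient is divided by its own root-mean-square, the 100x difference in raw gradient magnitude between the two tasks largely cancels out, which is the balancing behavior the paper targets.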


Similar resources

Adaptive Smoothed Online Multi-Task Learning

This paper addresses the challenge of jointly learning both the per-task model parameters and the inter-task relationships in a multi-task online learning setting. The proposed algorithm features a probabilistic interpretation, efficient updating rules, and flexible modulation of whether learners focus on their specific task or jointly address all tasks. The paper also proves a sub-linear regre...


Active Learning for Multi-Task Adaptive Filtering

In this paper, we propose an Active Learning (AL) framework for the Multi-Task Adaptive Filtering (MTAF) problem. Specifically, we explore AL approaches to rapidly improve an MTAF system, based on Dirichlet Process priors, with minimal user/task-level feedback. The proposed AL approaches select instances for delivery with a two-fold objective: 1) Improve future task-specific system performance ...


Learning Multi-Level Task Groups in Multi-Task Learning

In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information across them. Many MTL algorithms have been proposed to learn the underlying task groups. However, those methods are limited to learning task groups at only a single level, which may not be sufficient to model the complex structure among tasks in many real-world applications. In this paper, we propos...


Graphical Multi-Task Learning

We investigate multi-task learning in a setting where relationships between tasks are modeled by a graph structure. Most existing methods treat all pairs of tasks as being equally related, which can hurt performance when the true structure of task relationships is more complex. Our method uses regularization to encourage models for task pairs to be similar whenever they are connected in the ...
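The graph-coupled regularization described in this blurb can be sketched as a penalty that sums the squared distance between the parameter vectors of every connected task pair. This is a generic sketch of that style of regularizer under our own naming, not this paper's actual formulation.

```python
import numpy as np

def graph_mtl_penalty(W, edges, lam=0.1):
    """Graph-based MTL regularizer (illustrative sketch).

    W:     (num_tasks, dim) matrix, one parameter vector per task.
    edges: list of (i, j) task pairs connected in the task graph.
    lam:   regularization strength.

    Penalizes connected tasks for having dissimilar parameters, while
    leaving unconnected tasks uncoupled.
    """
    return lam * sum(np.sum((W[i] - W[j]) ** 2) for i, j in edges)

# Usage: three tasks, only tasks 0 and 1 are connected.
W = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
penalty = graph_mtl_penalty(W, edges=[(0, 1)], lam=1.0)  # -> 2.0
```

Adding this term to the joint training loss pulls connected tasks' models together without forcing all pairs to be equally similar, which is the advantage over uniform pairwise coupling.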


Federated Multi-Task Learning

Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i9.26275